VC-Dimension of Univariate Decision Trees


Similar Articles

On the VC-Dimension of Univariate Decision Trees

In this paper, we state and prove lower bounds on the VC-dimension of the univariate decision tree hypothesis class. The VC-dimension of a univariate decision tree depends on the VC-dimensions of its subtrees and on the number of inputs. In our previous work (Aslan et al., 2009), we proposed a search algorithm that calculates the VC-dimension of univariate decision trees exhaustively. Using...
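The exhaustive calculation mentioned above can be illustrated with a brute-force shattering check. The sketch below is not the authors' algorithm from Aslan et al. (2009); it is a minimal illustration that computes the VC-dimension of one-dimensional decision stumps (depth-1 univariate trees) over a finite sample, with the stump parameterization (threshold plus label orientation) chosen as a simplifying assumption.

```python
from itertools import combinations

def stump_labels(points, t, sign):
    # A univariate decision stump: label 1 iff (x > t) when sign=+1,
    # or iff (x <= t) when sign=-1 (the flipped orientation).
    return tuple(int((x > t) == (sign > 0)) for x in points)

def shatters(points):
    # Candidate thresholds: midpoints between sorted points,
    # plus sentinels below and above the whole range.
    xs = sorted(points)
    ts = [xs[0] - 1] + [(a + b) / 2 for a, b in zip(xs, xs[1:])] + [xs[-1] + 1]
    achievable = {stump_labels(points, t, s) for t in ts for s in (+1, -1)}
    # The set is shattered iff every one of the 2^n labelings is realized.
    return len(achievable) == 2 ** len(points)

def vc_dimension(sample, max_d):
    # Largest k such that some k-subset of the sample is shattered.
    d = 0
    for k in range(1, max_d + 1):
        if any(shatters(list(S)) for S in combinations(sample, k)):
            d = k
        else:
            break  # shattering is monotone: no larger set can be shattered
    return d
```

For stumps on the line, two points can always be shattered but the labeling (1, 0, 1) of three collinear points cannot, so `vc_dimension([0.0, 1.0, 2.0, 3.0], 4)` returns 2; deeper trees would require enumerating tree structures as well, which is what makes the exhaustive search expensive.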


Parallel univariate decision trees

Univariate decision tree algorithms are widely used in Data Mining because (i) they are easy to learn and (ii) once trained, they can be expressed in a rule-based manner. In several applications, particularly in Data Mining, the dataset to be learned is very large. In such cases it is highly desirable to construct univariate decision trees in reasonable time. This may be accomplished by parallelizing...


VC dimension of ellipsoids

We will establish that the VC dimension of the class of d-dimensional ellipsoids is (d² + 3d)/2, and that maximum likelihood estimation with N-component d-dimensional Gaussian mixture models induces a geometric class having VC dimension at least N(d² + 3d)/2.


Teaching Dimension versus VC Dimension

In this report, we give a brief survey of various results relating the Teaching Dimension and the VC-Dimension. The concept of Teaching Dimension was first introduced by Goldman and Kearns (1995) and Shinohara and Miyano (1991). In this model, an algorithm tries to learn a hidden concept c from examples, called the teaching set, which uniquely identify c among the rest of the concepts in the concept...


Inapproximability of VC Dimension and Littlestone's Dimension

We study the complexity of computing the VC Dimension and Littlestone's Dimension. Given an explicit description of a finite universe and a concept class (a binary matrix whose (x, C)-th entry is 1 iff element x belongs to concept C), both can be computed exactly in quasipolynomial time (n^{O(log n)}). Assuming the randomized Exponential Time Hypothesis (ETH), we prove nearly matching lower bounds on the ...
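The explicit-matrix setting described above admits a direct brute-force computation: for each candidate set of universe elements, check whether the concept rows realize all labelings on it. The function below is an illustrative sketch of that definition, not an algorithm from the paper; since a class of m concepts can shatter at most log₂ m elements, the search in this representation only needs small set sizes, which is the source of the quasipolynomial bound.

```python
from itertools import combinations

def vc_dim(matrix):
    """VC dimension of a concept class given as a list of rows (concepts)
    over columns (universe elements), entries in {0, 1}."""
    n = len(matrix[0])
    best = 0
    for k in range(1, n + 1):
        found = False
        for S in combinations(range(n), k):
            # Patterns induced by the concepts on the element subset S.
            patterns = {tuple(row[i] for i in S) for row in matrix}
            if len(patterns) == 2 ** k:   # S is shattered
                best, found = k, True
                break
        if not found:
            break  # no k-set is shattered, so no larger set can be either
    return best
```

For example, the class containing all four labelings of two elements has VC dimension 2, while a class of two complementary concepts over three elements has VC dimension 1.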



Journal

Journal title: IEEE Transactions on Neural Networks and Learning Systems

Year: 2015

ISSN: 2162-237X,2162-2388

DOI: 10.1109/tnnls.2014.2385837